Interval Markov Decision Processes with Multiple Objectives



Related articles

Markov Decision Processes with Multiple Long-Run Average Objectives

We consider Markov decision processes (MDPs) with multiple long-run average objectives. Such MDPs occur in design problems where one wishes to simultaneously optimize several criteria, for example, latency and power. The possible trade-offs between the different objectives are characterized by the Pareto curve. We show that every Pareto optimal point can be ε-approximated by a memoryless strate...
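A rough way to see how individual Pareto-optimal points arise is scalarization: fix a weight vector over the objectives and solve the resulting single-objective long-run average MDP as a linear program over occupation measures; sweeping the weights traces an approximation of the Pareto curve. The sketch below is illustrative only: the two-state, two-action, two-objective instance, the unichain assumption, and the helper pareto_point built on scipy are assumptions of this sketch, not the construction of the cited paper.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical toy instance: 2 states, 2 actions, 2 long-run average objectives.
    S, A, K = 2, 2, 2
    P = np.zeros((S, A, S))                      # P[s, a, s'] = transition probability
    P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
    P[1, 0] = [0.5, 0.5]; P[1, 1] = [0.1, 0.9]
    R = np.zeros((S, A, K))                      # R[s, a, k] = reward in objective k
    R[:, :, 0] = [[1.0, 4.0], [0.0, 2.0]]        # e.g. throughput
    R[:, :, 1] = [[3.0, 0.5], [2.0, 0.2]]        # e.g. negated power

    def pareto_point(weights):
        # Occupation-measure LP for the weighted objective (unichain assumption):
        # maximise sum_{s,a} x[s,a] * r_w(s,a) subject to flow balance and sum(x) = 1.
        r_w = (R * np.asarray(weights)).sum(axis=2).reshape(-1)
        A_eq = np.zeros((S + 1, S * A))
        for s in range(S):
            for a in range(A):
                col = s * A + a
                A_eq[s, col] += 1.0              # mass leaving s via a
                A_eq[:S, col] -= P[s, a]         # mass arriving at each successor
        A_eq[S, :] = 1.0                         # occupation measure sums to one
        b_eq = np.zeros(S + 1); b_eq[S] = 1.0
        res = linprog(-r_w, A_eq=A_eq, b_eq=b_eq, method="highs")
        x = res.x.reshape(S, A)
        value = np.einsum("sa,sak->k", x, R)     # achieved vector of long-run averages
        # Memoryless randomized strategy (every state has positive occupation here).
        policy = x / x.sum(axis=1, keepdims=True)
        return value, policy

    for w in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]:
        v, _ = pareto_point(w)
        print(w, np.round(v, 3))

Each weight vector yields one achievable trade-off together with a memoryless strategy realizing it.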


Synchronizing Objectives for Markov Decision Processes

We introduce synchronizing objectives for Markov decision processes (MDP). Intuitively, a synchronizing objective requires that eventually, at every step there is a state which concentrates almost all the probability mass. In particular, it implies that the probabilistic system behaves in the long run like a deterministic system: eventually, the current state of the MDP can be identified with a...
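One way to make the intuition concrete: fix a strategy, so the MDP induces a Markov chain, push the initial distribution forward step by step, and check whether from some point on a single state carries at least 1 - eps of the probability mass. The chain and threshold below are made up for illustration; the paper distinguishes several variants of synchronizing objectives that this simple check does not capture.

    import numpy as np

    # Hypothetical chain induced by some fixed strategy: state 2 eventually
    # absorbs all probability mass, so the run synchronizes.
    M = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0]])
    dist = np.array([1/3, 1/3, 1/3])             # initial distribution
    eps = 1e-3

    for step in range(1, 51):
        dist = dist @ M                          # one-step distribution update
        if dist.max() >= 1 - eps:
            print(f"step {step}: state {dist.argmax()} holds {dist.max():.4f} of the mass")
            break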


Interval Methods for Uncertain Markov Decision Processes

In this paper, the average case of Markov decision processes with uncertainty is considered. That is, a controlled Markov set-chain model with a finite state and action space is developed using interval arithmetic analysis, and we find a Pareto optimal policy that maximizes the average expected rewards over all stationary policies under a new partial order. The Pareto optimal policies are...
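The basic primitive behind interval models of this kind is optimizing an expectation over all transition distributions that respect given lower and upper bounds. A standard greedy routine for that inner step is sketched below on a single hypothetical transition row; it is a common ingredient of interval-MDP value computations, not the specific set-chain construction or partial order of the cited paper.

    import numpy as np

    def worst_case_expectation(lower, upper, values):
        # Minimal expected value of `values` over all p with lower <= p <= upper
        # and sum(p) == 1: push the free mass onto the lowest-valued states first.
        p = lower.astype(float).copy()
        slack = 1.0 - p.sum()                    # mass still to be assigned
        for i in np.argsort(values):
            give = min(slack, upper[i] - lower[i])
            p[i] += give
            slack -= give
        return float(p @ values)

    # Hypothetical interval transition row for one state-action pair.
    lower = np.array([0.1, 0.2, 0.1])
    upper = np.array([0.6, 0.7, 0.5])
    v     = np.array([10.0, 4.0, 7.0])           # current value estimates of successors
    print(worst_case_expectation(lower, upper, v))   # -> 5.2

The best-case expectation is obtained symmetrically by pushing mass onto the highest-valued states first.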


Compositional Reasoning for Interval Markov Decision Processes

Model checking probabilistic CTL properties of Markov decision processes with convex uncertainties has recently been investigated by Puggelli et al. Such model-checking algorithms typically suffer from state-space explosion. In this paper, we address probabilistic bisimulation to reduce the size of such an MDP while preserving the probabilistic CTL properties it satisfies. In particular, we...
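The reduction rests on the classical signature-refinement view of probabilistic bisimulation: repeatedly split blocks until, for every action, states in the same block assign the same probability to every block. A plain-MDP sketch of that loop is below; handling interval or convex uncertainty, as the paper does, needs a more involved lifting of transitions to blocks, which this sketch omits, and the function name and encodings are assumptions of the sketch.

    from collections import defaultdict

    def bisimulation_partition(states, actions, prob, labels):
        # prob[(s, a)] is a dict {successor: probability}; labels[s] is an atomic label.
        # Initial partition: states with the same label share a block (int id).
        ids = {lab: i for i, lab in enumerate(sorted(set(labels.values())))}
        block = {s: ids[labels[s]] for s in states}
        while True:
            sig = {}
            for s in states:
                per_action = []
                for a in actions:
                    mass = defaultdict(float)
                    for t, p in prob.get((s, a), {}).items():
                        mass[block[t]] += p      # probability pushed into each current block
                    per_action.append((a, tuple(sorted((b, round(m, 10)) for b, m in mass.items()))))
                sig[s] = (block[s], tuple(per_action))
            # States with identical signatures form the refined blocks.
            new_ids = {g: i for i, g in enumerate(sorted(set(sig.values())))}
            new_block = {s: new_ids[sig[s]] for s in states}
            if len(new_ids) == len(set(block.values())):
                return new_block                 # no block was split: partition is stable
            block = new_block

The quotient MDP then has one state per block, which is what shrinks the model before model checking.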


Multiple-Environment Markov Decision Processes

We introduce Multi-Environment Markov Decision Processes (MEMDPs), which are MDPs with a set of probabilistic transition functions. The goal in an MEMDP is to synthesize a single controller with guaranteed performance against all environments, even though the environment is unknown a priori. While MEMDPs can be seen as a special class of partially observable MDPs, we show that several verificatio...
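The distinguishing feature is that one controller is played against every transition function, and its guarantee is its worst-case value over the environments. A small sketch of this evaluation for a reachability objective under a fixed memoryless policy follows; the two environments, the policy, and the fixed-point iteration are illustrative assumptions of the sketch, not the synthesis algorithms of the paper.

    import numpy as np

    # Hypothetical MEMDP: shared states and actions, one transition tensor per
    # environment (envs[e][s, a, s']); state 2 is the reachability target.
    S, A, TARGET = 3, 2, 2
    envs = [np.zeros((S, A, S)), np.zeros((S, A, S))]
    envs[0][0] = [[0.8, 0.2, 0.0], [0.1, 0.1, 0.8]]
    envs[0][1] = [[0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]
    envs[0][2] = [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]
    envs[1][0] = [[0.2, 0.8, 0.0], [0.6, 0.3, 0.1]]
    envs[1][1] = [[0.0, 0.9, 0.1], [0.3, 0.0, 0.7]]
    envs[1][2] = [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]

    policy = np.array([[0.5, 0.5],               # pi(a | s): one fixed randomized
                       [0.5, 0.5],               # memoryless controller, reused
                       [1.0, 0.0]])              # unchanged in every environment

    def reach_prob(P, pi, target, iters=200):
        # Probability of eventually reaching `target` from state 0 in the
        # Markov chain induced by playing `pi` against transition tensor P.
        chain = np.einsum("sa,sat->st", pi, P)
        v = np.zeros(S); v[target] = 1.0
        for _ in range(iters):
            v = chain @ v
            v[target] = 1.0
        return v[0]

    # The controller's guarantee is its value in the worst environment.
    print(min(reach_prob(P, policy, TARGET) for P in envs))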



Journal

Journal title: ACM Transactions on Modeling and Computer Simulation

Year: 2019

ISSN: 1049-3301, 1558-1195

DOI: 10.1145/3309683